Notes - MAS962 Pustejovsky

Greg Detre

Tuesday, October 08, 2002

 

Reading, Pustejovsky

see scribbles on printout

Excerpts, web

Lexicographic database representation.htm

http://coral.lili.uni-bielefeld.de/Classes/Winter98/ComLex/GibbonElsnet/elsnetbook.dg/node15.html

Lexical semantic microstructure, as in Pustejovsky's Generative Lexicon Theory (for feature structure details, see [Pustejovsky 1995]).

         Qualia structure (semantic properties),

         Event Structure (temporal properties),

         Argument Structure (predicate-argument relations),

         Inheritance structure (generalisation macrostructure).

Generative Lexicon Theory.htm

http://budling.nytud.hu/~kalman/reading/siggen94/node5.html

GLT can be briefly characterized as a system involving four levels of representation, connected by a set of generative devices accounting for a compositional interpretation of words in context, namely:

         the argument structure, which specifies the predicate-argument structure for a word and the conditions under which the variables map to syntactic expressions;

         the event structure, giving the particular event type, such as S (state), P (process) or T (transition);

         the qualia structure, distributed among four roles: FORM (formal), CONST (constitutive), TELIC and AGENT (agentive);

         the inheritance structure, which involves two different kinds of mechanisms:

         the fixed inheritance mechanism, which is basically a fixed network of the traditional isa relationship found in AI, enriched with the different roles of the qualia structure;

         the projective inheritance mechanism, which can be intuitively characterized as a way of triggering semantically related concepts which define for each role the projective conclusion space (PCS). For instance, in the PCS of the telic and agentive roles of book we will find at least the following predicates: read, reissue, annotate, ... and write, print, bind, ... (respectively).

The most important of the generative devices connecting these four levels is a semantic operation called type coercion, which "captures the semantic relatedness between syntactically distinct expressions" (Pustejovsky, 1994a). Another notion introduced is that of lexical conceptual paradigms (LCPs), as formalized in (Pustejovsky, 1994b). We will say that the aim of an LCP is to capture the conceptual regularities across languages in terms of cognitive invariants, like "physical-object", "aperture", "natural kind", and alternations such as "container/containee", etc. Moreover, the possible syntactic projections are associated with LCPs. For instance, one can say "I left a leaflet in/inside the book at the page I want you to read", since book is an information-phys_obj-container, whereas one cannot say "I put the book in the top of the table", since "the top of the table" is a surface and not a container.
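The container/surface selection above can be sketched as a simple type check. This is a toy illustration, not Pustejovsky's formal machinery; the type assignments in the table are my assumptions for the two examples in the paragraph.

```python
# Toy sketch: "in X" is licensed only when X carries a container type.
# The type sets below are illustrative assumptions based on the examples
# in the text, not a real GL lexicon.
TYPES = {
    "book": {"information", "phys_obj", "container"},
    "top_of_table": {"surface"},
}

def can_say_in(noun):
    """Return True if 'in <noun>' is licensed (noun has the container type)."""
    return "container" in TYPES[noun]

print(can_say_in("book"))           # -> True
print(can_say_in("top_of_table"))   # -> False
```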

In the following, we will focus on two basic mechanisms of GLT, which allow us to bridge the word usage gap, that is, on a scale of lexical specificity, from free-combining words to idioms. These are:

(1) Reference to the qualia structure: By giving every category the ability to make reference to specific semantic functions, we are encoding the ``semantic basis'' of word usage information with a lexical item. This gives rise to semantic collocations.

(2) Cospecification: This is the basic means of encoding specific usage information in the form of either coherent argument subtypes, or already lexicalized phrases, giving rise to idiosyncrasies and idioms, respectively.

Laboratory for Linguistics and Computation -- Brandeis University.htm

http://www.cs.brandeis.edu/~llc/genlex.html

The focus of research in Generative Lexicon Theory is on the computational and cognitive modelling of natural language meaning; more specifically, on how words and their meanings combine to make meaningful texts. This research has focused on developing a lexically oriented theory of semantics, based on a methodology that makes use of formal and computational semantics. That is, we are looking at how word meaning in natural language might be characterized both formally and computationally, in order to account both for the subtle use of words in different sentences and for the creative use of words in novel contexts. One of the major goals of our current research, therefore, is to study polysemy, ambiguity, and sense-shifting phenomena in different languages.

Federica Busa and Pierrette Bouillon.htm

http://www.cs.brandeis.edu/~federica/articles/book-summary.html

The framework of Generative Lexicon theory (Pustejovsky, 1995) is built on the assumption that lexical items reflect the creative aspect of language use and are able to combine to create new meanings in different contexts. Much of the earlier research on the generative properties of the lexicon has provided substantial evidence that the study of word meaning requires a richer notion of lexical representation and compositionality, one which avoids an unbounded enumeration of lexical senses for polysemous words. Taking these assumptions as the starting point, we intend to present the subject of word meaning and the creative use of language from two angles: one which provides additional evidence for the generative lexicon approach, the other which discusses its repercussions and its pertinence to other domains.

Pustejovsky's Generative Lexicon Model.htm

http://www-users.cs.york.ac.uk/~mdeboni/research/generative_lexicon.html

Pustejovsky's lexicon model attempts to explain semantic problems such as the polymorphic nature of language and the creative use of words in novel contexts.

This is done by organising words into a generative lexicon of core word senses. This core set is then combined through a number of "generative devices" to obtain the larger set of word senses that makes up the lexicon of a language.

Limitations of an enumerative lexicon

The idea of a generative lexical model is contrasted with the more usual sense enumerative lexicon, where each word has a literal meaning and lexical ambiguity is treated by multiple listing of words. The enumerative lexicon approach fails to explain a number of linguistic phenomena, such as the creative use of words in novel contexts and the permeability of word senses.

Semantic representation in the generative lexicon model

According to the generative lexicon approach, lexical items are decomposed into structured forms (or templates) that provide the framework for the composition of lexical meanings.

A generative lexicon has at least four levels of semantic representation:

         argument structure,

         event structure,

         qualia structure,

         lexical inheritance structure.

The semantics of a lexical item a can therefore be defined as a structure composed by the following components:

a = < A, E, Q, I >

where A is the argument structure, E is the specification of the event type, Q provides the binding of A and E in the qualia structure and I is an embedding transformation that determines what information is inheritable from the global lexical structure.
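The tuple a = < A, E, Q, I > can be mocked up as a small data structure. A minimal sketch only: the field names, the Qualia class, and the example entry for "novel" are my illustrative choices (drawing on the book/novel qualia mentioned elsewhere in these notes), not a definition from GL.

```python
# Sketch of a four-level lexical entry a = <A, E, Q, I>.
# Field names and the example entry are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Qualia:
    formal: str        # FORM: what kind of thing it is
    constitutive: str  # CONST: its parts / make-up
    telic: str         # TELIC: its purpose or function
    agentive: str      # AGENT: how it comes into being

@dataclass
class LexicalEntry:
    word: str
    arg_structure: list                 # A: predicate-argument structure
    event_type: str                     # E: S (state), P (process), T (transition)
    qualia: Qualia                      # Q: binds A and E together
    inherits_from: list = field(default_factory=list)  # I: fixed isa links

novel = LexicalEntry(
    word="novel",
    arg_structure=["x: information", "y: phys_obj"],
    event_type="S",
    qualia=Qualia(formal="book", constitutive="narrative",
                  telic="read", agentive="write"),
    inherits_from=["literature", "phys_obj"],
)
print(novel.qualia.telic)   # -> read
```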

Argument Structure

The argument structure is made of:

         true arguments: parameters that are syntactically realised (e.g. "John arrived late");

         default arguments: parameters that participate in the qualia but need not be expressed syntactically (e.g. the bricks in "John built the house out of bricks");

         shadow arguments: parameters semantically incorporated into the lexical item, expressible only by subtyping (e.g. "Mary buttered her toast with an expensive butter");

         true adjuncts: modifiers not tied to any particular lexical item's semantic representation (e.g. "Mary drove down to New York on Tuesday").

Event Structure

The event structure defines the order in which events occur (both sequential and non-sequential) and the relation between an event and its subevents. This structure permits a fine-grained analysis of the meaning of verbs, distinguishing, for example, between accomplishment verbs and achievement verbs.
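The subevent decomposition can be sketched as nested events with an ordering. The diagnostic below (an accomplishment is a transition containing a process subevent, an achievement lacks it) is a rough simplification of the standard GL treatment; the example verbs are my assumptions.

```python
# Minimal sketch of event structure: events decomposed into ordered subevents.
# The accomplishment/achievement diagnostic is a rough simplification.
from dataclasses import dataclass

@dataclass
class Event:
    etype: str             # S (state), P (process), T (transition)
    subevents: tuple = ()  # ordered: each subevent precedes the next

# "build" (accomplishment): an extended process followed by a result state.
build = Event("T", (Event("P"), Event("S")))
# "arrive" (achievement): a culmination with no extended process.
arrive = Event("T", (Event("S"),))

def is_accomplishment(e):
    """Transition containing a process subevent (rough diagnostic)."""
    return e.etype == "T" and any(s.etype == "P" for s in e.subevents)

print(is_accomplishment(build))   # -> True
print(is_accomplishment(arrive))  # -> False
```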

Qualia Structure

Qualia structure is derived in part from the Aristotelian view of word meaning, which identified "modes of explanation" (aitiai). In particular, the following aspects of meaning (qualia) are identified:

         constitutive (CONST): the relation between an object and its constituent parts;

         formal (FORM): that which distinguishes the object within a larger domain;

         telic (TELIC): the object's purpose and function;

         agentive (AGENT): the factors involved in the object's origin, or how it is "brought about".

Combining types

Semantic types may be combined into a complex type through the use of the "dot" operator. A "type constructor" may create a complex type for a term a which carries senses s1 and s2, thus yielding a lexical paradigm containing the senses {s1.s2, s1, s2}, i.e. the combination s1.s2 as well as s1 and s2 individually. Thus, for example, a noun such as door, which is made of the combination of the types phys_obj and aperture, will have all of the following types available for expression: {phys_obj.aperture, phys_obj, aperture}, illustrating the fact that doors are sometimes referred to simply as physical objects, sometimes as apertures, and sometimes as a combination of the two.
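The type constructor above can be sketched in a couple of lines; the string-based encoding of types is my illustrative assumption.

```python
# Sketch of the "dot" type constructor: combining simple types s1 and s2
# into the lexical paradigm {s1.s2, s1, s2}. Types encoded as strings
# purely for illustration.
def lexical_paradigm(s1, s2):
    """Return the set of types a dot object s1.s2 makes available."""
    return {f"{s1}.{s2}", s1, s2}

# "door" combines phys_obj and aperture:
door_types = lexical_paradigm("phys_obj", "aperture")
print(sorted(door_types))
# -> ['aperture', 'phys_obj', 'phys_obj.aperture']
```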

Generative devices

The levels of semantic representation are connected by a set of generative devices which provide the compositional interpretation of words in context.

These generative devices include the following semantic transformations:

         type coercion, where a lexical item requires a specific type and coerces its argument to that type (e.g. "begin a novel" is understood as beginning to read or write the novel);

         selective binding, where a lexical item operates specifically on a substructure (a quale) of its argument (e.g. in "a fast typist", fast binds the telic role of typist);

         co-composition, where multiple elements of a phrase each contribute to the resulting sense (e.g. "bake a cake" has a creation sense, while "bake a potato" has a change-of-state sense).

Inference

Lexical inference is not examined in any detail by Pustejovsky, but he suggests that coercion and co-composition could be used as a type of "enthymemic" inference, a type of inference first analysed by Aristotle in his Rhetoric. An enthymeme is an argument of two stated propositions, such that the addition of a third, unstated proposition results in a categorical syllogism, e.g. "James is a Texan, therefore he is tall", where the additional proposition is "all Texans are tall". This could be applied to the generative lexicon, for example in a phrase like "Stephen King began a new novel", with the inference "Stephen King began to write a new novel" given by the information contained in the noun phrase "Stephen King" and the "agentive" value of novel, which is (informally) "someone writes a novel".
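The enthymemic reading can be sketched as a lookup into the noun's qualia. This is a toy, not Pustejovsky's formal mechanism: the qualia table and the choice of role are my assumptions, and a real account would need to decide between telic and agentive from context (here the caller simply picks).

```python
# Toy sketch of coercion-as-enthymeme: "X began a novel" is expanded
# using a quale of the noun. Qualia table and role choice are
# illustrative assumptions.
QUALIA = {
    "novel": {"telic": "read", "agentive": "write"},
    "coffee": {"telic": "drink", "agentive": "brew"},
}

def coerce(subject, verb, noun, role="agentive"):
    """Recover the elided event: 'began a novel' -> 'began to write a novel'."""
    event = QUALIA[noun][role]
    return f"{subject} {verb} to {event} a {noun}"

print(coerce("Stephen King", "began", "novel"))
# -> Stephen King began to write a novel
print(coerce("Mary", "began", "novel", role="telic"))
# -> Mary began to read a novel
```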

However, Pustejovsky merely suggests the possibility of inference, without examining the problem in any depth.

Lexical Knowledge and Commonsense Knowledge

The lexicon presented by Pustejovsky appears to contain a considerable amount of information that a linguist might consider better suited to a knowledge base, and the distinction between linguistic knowledge and commonsense knowledge appears to be very "fuzzy". However, it is argued that although there appears to be a continuum between the two types of knowledge, in some cases linguistic behaviour is best treated as language specific knowledge, not in terms of general inference mechanisms.

Limitations of the Generative Lexicon approach

One major drawback of the generative lexicon approach is that it is not at all clear how the information in the lexicon would be gathered. In common with most generative linguistics approaches, Pustejovsky does not draw his examples from an analysis of the English language "as spoken", i.e. he does not systematically analyse the linguistic data available, but rather seems to use his "intuition" as a native speaker to decide what a native speaker would or would not do (see, for example, Seuren 1998, pp.260ff for a critique of the Chomskian approach to empirical evidence). It is unclear whether a systematic approach to the writing of a generative lexicon would be possible, and the theory itself is incomplete (for example, the lexical inheritance structure is not examined in any detail).

Generative Lexicon for Information Extraction?

This approach, however, does look promising, and certainly allows for a very rich semantic lexicon which explains how objects may be "viewed" systematically from different angles (e.g. a book may be something to read or a physical object that falls on one's foot). The idea of using the lexicon in order to make inferences on given propositions, even though only mentioned in passing, appears very interesting, with much potential for applications such as information extraction, where it is necessary to infer as much as possible from the given sentences.

Generative Lexicon implementation

A partial implementation of the generative lexicon approach is given by [Buitelaar 98], who automatically generates an ontology and semantic database of semantic types from Wordnet [Miller et al. 1990]. Buitelaar's method attempts to generate a set of systematically related senses constituting basic "types", which are then connected to produce more complex types. No further analysis is made of these basic types, however, and the characteristics of the types (argument structure, event structure, qualia structure, lexical structure) are ignored.

 

Discarded

Thus far, Pustejovsky seems to be following in Quillian's footsteps, whose semantic networks were an attempt to represent lexical memory as "a complex network of elements and associations interconnecting them". In drawing this parallel, we can see one of the major contrasts between Pustejovsky's approach and Wordnet's, in that he is actively trying to relate semantic information to syntactic form.

However, his ideas in this paper are heavily based on earlier ideas in the field of Generative Lexicon Theory, which he fails to adequately introduce.

As the extremely helpful discussion at:

http://www-users.cs.york.ac.uk/~mdeboni/research/generative_lexicon.html

raises, Pustejovsky's model blurs the distinction between a linguistic knowledge base and commonsense knowledge.

 

Questions

join semi-lattice???

psycholinguistic evidence???

formalise/add-on within WN???

how much detailed work is necessary for the whole English language??? might the process be algorithmised after a certain point, or done statistically???

does it allow multiple inheritance??? complex/dot object???

could even this system actually capture our concepts in all their glory, or does something about language add to the concepts it expresses/is based on???

do you rely on anything more than intuition in your types etc.???

e.g. entity/event/quality, or the Aristotelian 4-part qualia structure...

what's the difference between a type structure for concepts and for natural language???

apart from being a bit differently categorised, can we see this as essentially a derivation of Quillian's semantic nets???

is this GL??? what (else) is GL??? what else does he say in his book???

complex vs functional???

linguistic generalisations vs metaphysical considerations???

how would he represent: colours, numbers, function words, prepositions, adverbs, linguistic relations (e.g. the concept of a 'sentence')

http://www-users.cs.york.ac.uk/~mdeboni/research/generative_lexicon.html

dictionary vs knowledge base vs common sense reasoning???